Collaborating Authors

possibility theory


Resolving Zadeh's Paradox: Axiomatic Possibility Theory as a Foundation for Reliable Artificial Intelligence

Bychkov, Oleksii, Bychkova, Sophia, Lytvynchuk, Khrystyna

arXiv.org Artificial Intelligence

This work advances and substantiates the thesis that the resolution of the foundational crisis in Dempster-Shafer theory (DST) lies in the domain of possibility theory, specifically in the axiomatic approach developed in Bychkov's article. Unlike numerous attempts to fix Dempster's rule, this approach builds from scratch a logically consistent and mathematically rigorous foundation for working with uncertainty, using the dualistic apparatus of possibility and necessity measures. The aim of this work is to demonstrate that possibility theory is not merely an alternative, but provides a fundamental resolution to the DST paradoxes. A comparative analysis of three paradigms will be conducted: probabilistic, evidential, and possibilistic. Using a classic medical diagnostic dilemma as an example, it will be shown how possibility theory allows for the correct processing of contradictory data, avoiding the logical traps of DST and bringing formal reasoning closer to the logic of natural intelligence.
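
To make the diagnostic dilemma concrete, here is a minimal sketch (not code from the paper) that runs Dempster's rule on Zadeh's classic two-doctor example and contrasts it with a simple min-based possibilistic combination; the diagnoses and degrees are the standard illustrative ones.

```python
# Zadeh's paradox: Dempster's rule vs. a min-based possibilistic combination.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Dempster's normalization silently redistributes all conflicting mass.
    return {f: w / (1.0 - conflict) for f, w in combined.items()}, conflict

# Doctor 1: meningitis 0.99, tumor 0.01; Doctor 2: concussion 0.99, tumor 0.01.
m1 = {frozenset({"meningitis"}): 0.99, frozenset({"tumor"}): 0.01}
m2 = {frozenset({"concussion"}): 0.99, frozenset({"tumor"}): 0.01}
m12, k = dempster_combine(m1, m2)
print(m12, f"conflict={k:.4f}")  # tumor gets mass 1.0 despite near-total conflict

# Possibilistic alternative: conjunctive (min) combination of distributions.
pi1 = {"meningitis": 1.0, "tumor": 0.01, "concussion": 0.0}
pi2 = {"concussion": 1.0, "tumor": 0.01, "meningitis": 0.0}
pi = {w: min(pi1[w], pi2[w]) for w in pi1}
print(pi)  # a low maximum possibility exposes the conflict instead of hiding it
```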


Maxitive Donsker-Varadhan Formulation for Possibilistic Variational Inference

Singh, Jasraj, Wongso, Shelvia, Houssineau, Jeremie, Chérief-Abdellatif, Badr-Eddine

arXiv.org Machine Learning

Variational inference (VI) is a cornerstone of modern Bayesian learning, enabling approximate inference in complex models that would otherwise be intractable. However, its formulation depends on expectations and divergences defined through high-dimensional integrals, often rendering analytical treatment impossible and necessitating heavy reliance on approximate learning and inference techniques. Possibility theory, an imprecise-probability framework, makes it possible to model epistemic uncertainty directly instead of leveraging subjective probabilities. While this framework provides robustness and interpretability under sparse or imprecise information, adapting VI to the possibilistic setting requires rethinking core concepts such as entropy and divergence, which presuppose additivity. In this work, we develop a principled formulation of possibilistic variational inference and apply it to a special class of exponential-family functions, highlighting parallels with their probabilistic counterparts and revealing the distinctive mathematical structures of possibility theory.
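
The shift from additivity to maxitivity can be illustrated numerically. The sketch below compares the probabilistic log-partition quantity log E_p[e^f] at the heart of the Donsker-Varadhan formula with one plausible maxitive analogue, sup_x (f(x) + log pi(x)); this analogue is an assumption for illustration, not necessarily the paper's exact formulation.

```python
# Additive vs. maxitive "log-partition" quantities on a 1-D grid.
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
f = np.sin(x)                              # test function

# Probabilistic side: Gaussian density with additive normalization.
p = np.exp(-0.5 * x**2)
p /= p.sum() * dx                          # integrates to 1 (Riemann sum)
log_partition = np.log((np.exp(f) * p).sum() * dx)

# Possibilistic side: Gaussian-shaped possibility distribution with
# maxitive normalization, sup_x pi(x) = 1.
pi = np.exp(-0.5 * x**2)
pi /= pi.max()
max_log_partition = np.max(f + np.log(pi))  # illustrative maxitive analogue

print(f"probabilistic log E_p[e^f]   = {log_partition:.4f}")
print(f"maxitive sup_x (f + log pi)  = {max_log_partition:.4f}")
```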


A Concept of Possibility for Real-World Events

Schwartz, Daniel G.

arXiv.org Artificial Intelligence

This paper offers a new concept of possibility as an alternative to the nowadays-standard concept originally introduced by L.A. Zadeh in 1978. This new version was inspired by the original but, formally, has nothing in common with it other than that both adopt the Łukasiewicz multivalent interpretation of the logical connectives. Moreover, rather than seeking to provide a general notion of possibility, this concept focuses specifically on the possibility of a real-world event. An event is viewed as having prerequisites that enable its occurrence and constraints that may impede its occurrence, and the possibility of the event is computed as a function of the probabilities that the prerequisites hold and the constraints do not. This version of possibility might appropriately be applied to problems of planning: when there are multiple plans available for achieving a goal, the theory can be used to determine which plan is most possible, i.e., easiest or most feasible to complete. It is speculated that this model of reasoning correctly captures normal human reasoning about plans. The theory is elaborated and an illustrative example for vehicle route planning is provided, along with a suggestion of potential future applications.
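
A minimal sketch of the construction described above, assuming the prerequisite and constraint probabilities are aggregated with the Łukasiewicz conjunction the paper adopts for its connectives; the exact aggregation and the route-planning numbers are illustrative, not taken from the paper.

```python
# Possibility of an event from prerequisite/constraint probabilities.
from functools import reduce

def luk_and(a: float, b: float) -> float:
    """Łukasiewicz conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def event_possibility(prereq_probs, constraint_probs):
    # Prerequisites must hold; constraints must fail to hold.
    factors = list(prereq_probs) + [1.0 - p for p in constraint_probs]
    return reduce(luk_and, factors, 1.0)

# Route-planning flavour: two candidate plans for reaching a destination.
plan_a = event_possibility(prereq_probs=[0.95, 0.9], constraint_probs=[0.1])
plan_b = event_possibility(prereq_probs=[0.8, 0.85], constraint_probs=[0.3])
print(f"plan A: {plan_a:.2f}, plan B: {plan_b:.2f}")  # choose the more possible plan
```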


Improving Active Learning with a Bayesian Representation of Epistemic Uncertainty

Thomas, Jake, Houssineau, Jeremie

arXiv.org Artificial Intelligence

A popular strategy for active learning is to specifically target a reduction in epistemic uncertainty, since aleatoric uncertainty is often considered intrinsic to the system of interest and therefore not reducible. Yet distinguishing these two types of uncertainty remains challenging, and no single strategy consistently outperforms the others. We propose to use a particular combination of probability and possibility theories, with the aim of using the latter to specifically represent epistemic uncertainty, and we show how this combination leads to new active learning strategies with desirable properties. To demonstrate the efficiency of these strategies in non-trivial settings, we introduce the notion of a possibilistic Gaussian process (GP) and consider GP-based multiclass and binary classification problems, for which the proposed methods display strong performance on both simulated and real datasets.
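
The following toy sketch illustrates the general idea of targeting epistemic rather than aleatoric uncertainty when choosing a query point; the possibilistic scores are random stand-ins, and nothing here reproduces the paper's possibilistic GP construction.

```python
# Toy epistemic-targeting query rule with stand-in scores.
import numpy as np

rng = np.random.default_rng(0)

# For each unlabeled point: predictive class probabilities (aleatoric side)
# and a hypothetical possibility degree that the current model is wrong
# (epistemic side).
probs = rng.dirichlet(np.ones(3), size=10)        # 10 points, 3 classes
pi_wrong = rng.uniform(size=10)                   # stand-in epistemic scores

aleatoric = -(probs * np.log(probs)).sum(axis=1)  # predictive entropy
query = int(np.argmax(pi_wrong))                  # target epistemic uncertainty
print("query index:", query, "| entropy there:", round(aleatoric[query], 3))
```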


Redesigning the ensemble Kalman filter with a dedicated model of epistemic uncertainty

Kimchaiwong, Chatchuea, Houssineau, Jeremie, Johansen, Adam M.

arXiv.org Artificial Intelligence

The problem of incorporating information from observations received serially in time is widespread in the field of uncertainty quantification. Within a probabilistic framework, such problems can be addressed using standard filtering techniques. However, in many real-world problems, some (or all) of the uncertainty is epistemic, arising from a lack of knowledge, and is difficult to model probabilistically. This paper introduces a possibilistic ensemble Kalman filter designed for this setting and characterizes some of its properties. Using possibility theory to describe epistemic uncertainty is appealing from a philosophical perspective, and certain heuristics often employed in standard ensemble Kalman filters are easy to justify within this framework as principled approaches to capturing uncertainty. The possibilistic approach motivates a robust mechanism for characterizing uncertainty which shows good performance with small sample sizes, and can outperform standard ensemble Kalman filters at a given sample size, even when dealing with genuinely aleatoric uncertainty.
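
For context, the sketch below implements the classical stochastic ensemble Kalman analysis step that the paper redesigns; the possibilistic replacement itself is not shown here.

```python
# One stochastic EnKF analysis step (the classical baseline).
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """ensemble: (N, d) states; y: (m,) observation;
    H: (m, d) observation operator; R: (m, m) observation noise covariance."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)         # state anomalies, (N, d)
    Y = X @ H.T                                  # observed anomalies, (N, m)
    S = Y.T @ Y / (N - 1) + R                    # innovation covariance
    K = (X.T @ Y / (N - 1)) @ np.linalg.inv(S)   # Kalman gain, (d, m)
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(1)
ens = rng.normal(size=(50, 2))                   # 50 members, 2-dim state
H = np.array([[1.0, 0.0]])                       # observe first component only
updated = enkf_update(ens, y=np.array([0.5]), H=H, R=np.eye(1) * 0.1, rng=rng)
print(updated.mean(axis=0))                      # posterior ensemble mean
```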


The Sigma-Max System Induced from Randomness and Fuzziness

Mei, Wei, Li, Ming, Cheng, Yuanzeng, Liu, Limin

arXiv.org Artificial Intelligence

This paper induces probability theory (the sigma system) and possibility theory (the max system) from randomness and fuzziness, respectively, with the aim of placing the as-yet immature theory of possibility on a sound foundation. This objective is achieved by addressing three open key issues: a) the lack of clear mathematical definitions of randomness and fuzziness; b) the lack of an intuitive mathematical definition of possibility; and c) the lack of an abstraction procedure leading from the intuitive definitions of probability/possibility to their axiomatic definitions. The last issue, in particular, involves the question of why the key axiom of "maxitivity" is adopted for the possibility measure. By exploiting the properties of the well-defined notions of randomness and fuzziness, we derive the conclusion that "max" is the only, albeit non-strict, disjunctive operator applicable across the fuzzy event space, and that it is an exact operator for fuzzy feature extraction, ensuring that max inference is an exact mechanism. It is fair to claim that the long-standing lack of consensus on the foundation of possibility theory is thereby resolved, which should facilitate wider adoption of possibility theory in practice and promote the joint prosperity of the two uncertainty theories of probability and possibility. Randomness and fuzziness are well recognized as two fundamental kinds of uncertainty in this world, and how to correctly comprehend them and handle them effectively in practice remains an open topic. For modeling random uncertainty, probability theory and the derivative subjects of statistics and stochastic processes are without doubt the classic tool set. Probability theory, which satisfies the key axiom of "additivity" [18,23], has matured to the point that nearly the whole edifice of the information sciences rests upon it, with applications across a great diversity of communities [22,29,41,42,52,53].
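
The maxitivity axiom discussed above is easy to check mechanically: the possibility measure induced by a distribution satisfies Pi(A ∪ B) = max(Pi(A), Pi(B)), in contrast with the additivity of probability. A minimal sketch:

```python
# Maxitivity of a possibility measure induced by a possibility distribution.
def possibility(event, pi):
    """Pi(A) = sup of the distribution over the event A."""
    return max((pi[w] for w in event), default=0.0)

pi = {"a": 1.0, "b": 0.7, "c": 0.3}     # sup-normalized: max value is 1
A, B = {"a", "b"}, {"b", "c"}

# Maxitivity holds even though A and B overlap; additivity would not.
assert possibility(A | B, pi) == max(possibility(A, pi), possibility(B, pi))
print(possibility(A, pi), possibility(B, pi), possibility(A | B, pi))
```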


Belief functions induced by random fuzzy sets: Application to statistical inference

Denoeux, Thierry

arXiv.org Artificial Intelligence

The Dempster-Shafer (DS) theory is based on the representation of elementary pieces of evidence by belief functions (defined as completely monotone set functions) and on their combination by an operator called the product-intersection rule, or Dempster's rule of combination. A belief function can be constructed by comparing a piece of evidence to a scale of canonical examples, such as randomly coded messages whose meanings are determined by chance [40]. A belief function on a set Θ can be seen as being induced by a multi-valued mapping from a probability space to Θ; it is mathematically equivalent to a random set [5, 34]. As rational beliefs are essentially determined by evidence, DS theory can be regarded as a general framework for reasoning with uncertainty [11]. Shortly after the introduction of DS theory, Zadeh independently proposed another formalism, called possibility theory [54], in which the concept of "fuzzy restriction" plays a central role.
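
A minimal sketch of the random-set view described above: a mass function over focal sets induces a belief function (summing over focal sets included in the event) and a plausibility function (summing over focal sets intersecting it); with nested (consonant) focal sets, the plausibility function is a possibility measure, which connects the two formalisms mentioned in the abstract.

```python
# Belief and plausibility induced by a mass function over frozenset focal sets.
def belief(event, m):
    """Bel(A): total mass of focal sets contained in A."""
    return sum(w for focal, w in m.items() if focal <= event)

def plausibility(event, m):
    """Pl(A): total mass of focal sets intersecting A."""
    return sum(w for focal, w in m.items() if focal & event)

# A consonant (nested) mass function, so Pl is a possibility measure.
m = {frozenset({"a"}): 0.5,
     frozenset({"a", "b"}): 0.3,
     frozenset({"a", "b", "c"}): 0.2}
E = frozenset({"a", "b"})
print(belief(E, m), plausibility(E, m))  # Bel(E) <= Pl(E) always holds
```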


Compatible-Based Conditioning in Interval-Based Possibilistic Logic

Benferhat, Salem (Artois University) | Levray, Amélie (Artois University) | Tabia, Karim (Artois University) | Kreinovich, Vladik (University of Texas at El Paso)

AAAI Conferences

Interval-based possibilistic logic is a flexible setting extending standard possibilistic logic such that each logical expression is associated with a sub-interval of [0,1]. This paper focuses on the fundamental issue of conditioning in the interval-based possibilistic setting. The first part of the paper proposes a set of natural properties that an interval-based conditioning operator should satisfy. We then give a natural and safe definition for conditioning an interval-based possibility distribution, based on applying standard min-based or product-based conditioning to the set of all associated compatible possibility distributions. We analyze the obtained posterior distributions and provide a precise characterization of the lower and upper endpoints of the intervals associated with interpretations. The second part of the paper provides an equivalent syntactic computation of interval-based conditioning when interval-based distributions are compactly encoded by means of interval-based possibilistic knowledge bases. We show that interval-based conditioning is achieved without extra computational cost compared to conditioning standard possibilistic knowledge bases.
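
For reference, here is a sketch of the standard min-based conditioning operator that the paper applies to every compatible distribution of an interval-based one; the example distribution is made up for illustration.

```python
# Standard min-based conditioning of a possibility distribution.
def min_conditioning(pi, phi):
    """Condition distribution pi on the event phi (a set of worlds)."""
    Pi_phi = max(pi[w] for w in phi)     # possibility of the evidence
    out = {}
    for w, v in pi.items():
        if w not in phi:
            out[w] = 0.0                 # worlds refuting phi are excluded
        elif v == Pi_phi:
            out[w] = 1.0                 # most plausible models of phi
        else:
            out[w] = v                   # other models keep their degree
    return out

pi = {"w1": 1.0, "w2": 0.6, "w3": 0.4}
print(min_conditioning(pi, phi={"w2", "w3"}))  # {'w1': 0.0, 'w2': 1.0, 'w3': 0.4}
```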


The Cube of Opposition: A Structure Underlying Many Knowledge Representation Formalisms

Dubois, Didier (IRIT, University of Toulouse) | Prade, Henri (IRIT, University of Toulouse) | Rico, Agnès (ERIC, Université Claude Bernard Lyon 1)

AAAI Conferences

The square of opposition is a structure involving two involutive negations and relating quantified statements, invented in Aristotle's time. Rediscovered in the second half of the twentieth century, and advocated as being of interest for understanding conceptual structures and solving problems in paraconsistent logics, the square of opposition has recently been completed into a cube, corresponding to the introduction of a third negation. Such a cube can be encountered in very different knowledge representation formalisms, such as modal logic, possibility theory in its all-or-nothing version, formal concept analysis, rough set theory, and abstract argumentation. After restating these results in a unified perspective, the paper proposes a graded extension of the cube and shows that several qualitative as well as quantitative formalisms, such as Sugeno integrals used in multiple-criteria aggregation and qualitative decision theory, or belief functions and Choquet integrals, are amenable to transformations that form graded cubes of opposition. This discovery leads to a new perspective on many knowledge representation formalisms, laying bare their underlying common features. The cube of opposition exhibits fruitful parallelisms between different formalisms, highlighting components present in one formalism and currently absent from another.
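
The possibilistic edge of the square underlying this cube can be checked numerically: necessity and possibility are dual through an involutive negation, N(p) = 1 - Pi(not p), and subalternation N(p) <= Pi(p) holds for sup-normalized distributions. A small sketch:

```python
# Duality and subalternation in the possibilistic square of opposition.
def Pi(event, pi):
    """Possibility of an event (set of worlds)."""
    return max(pi[w] for w in event)

def N(event, pi):
    """Necessity, dual to possibility via an involutive negation."""
    universe = set(pi)
    return 1.0 - Pi(universe - event, pi)

pi = {"w1": 1.0, "w2": 0.3, "w3": 0.1}   # sup-normalized distribution
p = {"w1", "w2"}                          # models of a proposition p
assert N(p, pi) <= Pi(p, pi)              # subalternation edge of the square
print(f"N(p)={N(p, pi):.1f}, Pi(p)={Pi(p, pi):.1f}")
```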


Automated Reasoning Using Possibilistic Logic: Semantics, Belief Revision and Variable Certainty Weights

Dubois, Didier, Lang, Jerome, Prade, Henri

arXiv.org Artificial Intelligence

In this paper an approach to automated deduction under uncertainty, based on possibilistic logic, is proposed; for that purpose we deal with clauses weighted by a degree which is a lower bound of a necessity or a possibility measure, according to the nature of the uncertainty. Two resolution rules are used to cope with the different situations, and the refutation method can be generalized. Besides, the lower bounds are allowed to be functions of variables involved in the clause, which gives hypothetical reasoning capabilities. The relation between our approach and the idea of minimizing abnormality is briefly discussed. In the case where only lower bounds of necessity measures are involved, a semantics is proposed, in which the completeness of the extended resolution principle is proved. Moreover, deduction from a partially inconsistent knowledge base can be managed in this approach and displays some form of non-monotonicity.
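
A minimal sketch of the weighted resolution step described above: resolving two clauses whose necessity lower bounds are alpha and beta yields a resolvent weighted min(alpha, beta); the clause encoding is mine, not the paper's.

```python
# Resolution for clauses weighted by necessity lower bounds.
def resolve(clause1, w1, clause2, w2):
    """Return resolvents of two weighted clauses; literals are (atom, sign)."""
    out = []
    for atom, sign in clause1:
        if (atom, not sign) in clause2:
            resolvent = (clause1 - {(atom, sign)}) | (clause2 - {(atom, not sign)})
            out.append((frozenset(resolvent), min(w1, w2)))  # weight = min
    return out

c1 = frozenset({("p", True), ("q", True)})   # (p or q,  N >= 0.8)
c2 = frozenset({("p", False), ("r", True)})  # (~p or r, N >= 0.6)
print(resolve(c1, 0.8, c2, 0.6))             # [(q or r, N >= 0.6)]
```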